    The Valletta travel information service

    This is an invited summary of the Valletta Travel Information Service (VaTIS) position paper (Dingli & Attard, 2016). In the paper we highlight the main concepts behind the travel information service currently being deployed in Valletta, the capital city of Malta. VaTIS harvests data from an existing road-pricing system that uses automatic number plate recognition (ANPR) technology, together with beacons installed in all the streets of the city, creating a digital landscape of the city in which to process travel-related information. The data is collected from the various sensors via users’ smartphones and fed into modelling and simulation software capable of predicting the movement of cars around the city’s road network. This information is then used to estimate the free parking spaces around the city, helping commuters reduce the time spent cruising in search of parking. Furthermore, users approaching the city receive real-time information about the actual state of traffic within it, which they can use to make informed decisions: they can opt to park outside the city if the parking spaces within the city walls are saturated, or they can choose the entry point that gives them the highest probability of finding a space. The geodesign and architecture at the heart of VaTIS are low cost, making the same model easily replicable in other cities with minimal changes. In this paper we describe how the different components of VaTIS work and integrate together. We also report on the first phase of the project, which has been running successfully for a number of years and which will be integrated with our system. Overall, the project aims to create a sustainable mobility model throughout the entire city by providing effective travel information with the help of crowd-sourcing initiatives.
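
    The position paper contains no code, but the entry-point recommendation it describes can be illustrated with a minimal sketch. Everything below is hypothetical: the EntryPoint class, the gate names, and the occupancy figures stand in for counts that VaTIS would derive from its ANPR and beacon data.

        from dataclasses import dataclass

        @dataclass
        class EntryPoint:
            name: str
            capacity: int   # parking spaces reachable from this gate
            occupied: int   # current estimate from ANPR/beacon counts

            def free_probability(self) -> float:
                """Estimated chance of finding a space near this gate."""
                free = max(self.capacity - self.occupied, 0)
                return free / self.capacity if self.capacity else 0.0

        def recommend(entries: list[EntryPoint], threshold: float = 0.05) -> str:
            """Suggest the gate with the best odds, or park-and-ride if all are saturated."""
            best = max(entries, key=lambda e: e.free_probability())
            if best.free_probability() < threshold:
                return "park outside the city walls (park-and-ride)"
            return f"enter via {best.name}"

        gates = [EntryPoint("City Gate", 120, 118),
                 EntryPoint("Marsamxett", 80, 41),
                 EntryPoint("Victoria Gate", 60, 59)]
        print(recommend(gates))   # -> "enter via Marsamxett"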

    The citizen twin : designing the future of public service using artificial intelligence

    Citizens are overwhelmed by the tsunami of data that engulfs them daily (Bawden and Robinson, 2009; Dingli and Seychell, 2015; Hemp, 2009). The promises of the information society have fizzled out (Halal, 2008), and many organisations struggle to see their digital transformation through (Barmuta et al., 2020; Gregersen, 2018; Khitskov et al., 2017). The information overload people face daily sows further confusion while making cooperation between individuals more difficult. In fact, in many cases, rather than leading to constructive dialogue, it merely polarises opposing views and fuels further friction (O’Callaghan, 2020; Seargeant and Tagg, 2019; Williams et al., 2015). People today rely on algorithms to view the world, yet many find it challenging to build a coherent picture of reality and digest all this information.

    Emergent realities for social wellbeing : environmental, spatial and social pathways

    Environmental change is being felt worldwide, especially in very crowded urban centres and metropolitan zones. Cities that host high numbers of visitors are particularly exposed to environmental change, as are the resources committed to supporting all the activities those visitors generate. Environmental change touches several levels of society, depending on how involved each is in defining wellbeing. As more citizens move into urban areas, environmental change will evolve along with their lifestyles. Cities all around the globe have to be prepared for massive change in terms of sustainable development.

    USEFul : a framework to mainstream web site usability through automated evaluation

    A paradox has been observed: web site usability is proven to be an essential element of a web site, yet an abundance of web pages with poor usability exists. This discrepancy is the result of limitations that currently prevent web developers in the commercial sector from producing usable web sites. In this paper we propose a framework whose objective is to alleviate this problem by automating certain aspects of the usability evaluation process. Mainstreaming comes as a result of automation, enabling a non-expert in the field of usability to conduct the evaluation and reducing the associated costs. Additionally, the framework offers the flexibility of adding, modifying, or deleting guidelines without altering the code that references them, since the guidelines and the code are two separate components. A comparison of evaluation results produced using the framework against published evaluations carried out by web site usability professionals reveals that the framework automatically identifies the majority of usability violations. Due to the consistency with which it evaluates, it also identified guideline-related violations that were not spotted by the human evaluators.
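
    The separation of guidelines from evaluation code is the framework’s central design choice. The sketch below illustrates that pattern with two hypothetical guidelines expressed as data; the real framework’s guideline set and checks are far richer.

        import re

        # Guidelines live in a data table, separate from the engine, so rules
        # can be added, changed, or removed without touching evaluation code.
        GUIDELINES = [
            {"id": "G1", "desc": "Every image must have alt text",
             "violated": lambda html: re.search(r"<img(?![^>]*\balt=)[^>]*>", html, re.I)},
            {"id": "G2", "desc": "Page must declare a title",
             "violated": lambda html: not re.search(r"<title>\s*\S", html, re.I)},
        ]

        def evaluate(html: str) -> list[str]:
            """Run every registered guideline against a page; return violations."""
            return [f"{g['id']}: {g['desc']}" for g in GUIDELINES if g["violated"](html)]

        page = "<html><head></head><body><img src='logo.png'></body></html>"
        for violation in evaluate(page):
            print(violation)   # both G1 and G2 fire on this page

    Because the engine only iterates over the table, a non-expert can extend the guideline set without understanding the evaluation code, which is what makes the mainstreaming claim plausible.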

    Annotating the semantic web

    The web of today has evolved into a huge repository of rich multimedia content for human consumption. The exponential growth of the web has allowed information to reach astronomical proportions; far more than a mere human can manage, causing the problem of information overload. Because of this, the creators of the web (10) spoke of using computer agents to process the large amounts of data. To do this, they planned to extend the current web to make it understandable by computer programs. This new web is referred to as the Semantic Web. Given the huge size of the web, a collective effort is necessary to extend it, and for this to happen, tools easy enough for non-experts to use must be available. This thesis first proposes a methodology which semi-automatically labels semantic entities in web pages. The methodology requires a user to provide some initial examples. The tool then learns how to reproduce the user's examples and generalises over them by making use of Adaptive Information Extraction (AIE) techniques. When its level of performance is good enough compared with the user's, it takes over the process and annotates the remaining documents autonomously. The second methodology goes a step further and attempts to gather semantically typed information from web pages automatically. It starts from the assumption that semantics are already available all over the web, and that by combining a number of freely available resources (like databases) with AIE techniques, it is possible to extract most information automatically. These techniques will certainly not solve all the problems brought about by the advent of the Semantic Web, but they are intended as a step towards making the Semantic Web a reality.
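
    The hand-over logic of the first methodology can be sketched in a few lines. The learner, annotator, and agreement metric below are placeholders for the AIE machinery described in the thesis, not its actual interfaces.

        def annotate_corpus(documents, annotator, learner, threshold=0.9):
            """Route documents to the human until the learner reproduces the
            human's annotations well enough, then finish autonomously."""
            labelled = []
            for i, doc in enumerate(documents):
                human = annotator(doc)        # user marks semantic entities
                guess = learner.predict(doc)  # learner attempts the same document
                labelled.append((doc, human))
                learner.train(doc, human)     # generalise over the new example
                if learner.agreement(human, guess) >= threshold:
                    # Close enough to the user: annotate the rest without help.
                    labelled.extend((d, learner.predict(d)) for d in documents[i + 1:])
                    break
            return labelled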

    Explainable AI for Interpretable Credit Scoring

    With the ever-growing achievements in Artificial Intelligence (AI) and the recent surge of enthusiasm in Financial Technology (FinTech), applications such as credit scoring have gained substantial academic interest. Credit scoring helps financial experts make better decisions about whether or not to accept a loan application, so that loans with a high probability of default are not accepted. Apart from the noisy and highly imbalanced data such credit scoring models face, recent regulations such as the ‘right to explanation’ introduced by the General Data Protection Regulation (GDPR) and the Equal Credit Opportunity Act (ECOA) have added the need for model interpretability, to ensure that algorithmic decisions are understandable and coherent. An interesting concept that has recently been introduced is eXplainable AI (XAI), which focuses on making black-box models more interpretable. In this work, we present a credit scoring model that is both accurate and interpretable. For classification, state-of-the-art performance on the Home Equity Line of Credit (HELOC) and Lending Club (LC) datasets is achieved using the Extreme Gradient Boosting (XGBoost) model. The model is then further enhanced with a 360-degree explanation framework, which provides the different explanations (i.e. global, local feature-based and local instance-based) required by different people in different situations. Evaluation through functionally-grounded, application-grounded and human-grounded analysis shows that the explanations provided are simple and consistent, and satisfy the six predetermined hypotheses testing for correctness, effectiveness, easy understanding, detail sufficiency and trustworthiness.
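
    As a rough illustration of the recipe, the sketch below trains an XGBoost classifier and derives both a global and a local feature-based explanation. SHAP is used here as a generic stand-in for the paper’s 360-degree explanation framework, and random data replaces the HELOC and Lending Club datasets.

        import numpy as np
        import shap
        import xgboost as xgb

        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 5))                  # stand-in applicant features
        y = (X[:, 0] - 0.5 * X[:, 3] > 0).astype(int)  # stand-in default labels

        model = xgb.XGBClassifier(n_estimators=50, max_depth=3)
        model.fit(X, y)

        explainer = shap.TreeExplainer(model)
        shap_values = explainer.shap_values(X[:1])     # local, feature-based
        print("global importance:", model.feature_importances_)
        print("why applicant 0:", shap_values[0])      # signed contribution per feature

    The paper’s local instance-based explanations and its grounded evaluation protocols sit on top of this kind of output and are not reproduced here.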

    Dialog systems and their inputs

    One of the main limitations of existing domain-independent conversational agents is that their general and linguistic knowledge is limited to what the agents' developers explicitly defined. A system that analyses user input at a deeper level of abstraction and backs its knowledge with common-sense information will therefore be capable of providing more adequate responses, which in turn results in a better overall user experience. From this premise, a framework was proposed and a working prototype implemented upon it. These make use of various natural language processing tools, online and offline knowledge bases, and other information sources to comprehend user input and construct relevant responses.
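
    The premise can be made concrete with a toy pipeline: extract a topic from the user’s utterance and back it with common-sense knowledge before responding. ConceptNet’s public API stands in here for the paper’s mix of online and offline knowledge bases, and the last-word heuristic is a deliberately naive placeholder for the NLP tools.

        import requests

        def common_sense_facts(term: str, limit: int = 3) -> list[str]:
            """Fetch a few common-sense statements about a term from ConceptNet."""
            url = f"http://api.conceptnet.io/c/en/{term}"
            edges = requests.get(url, timeout=10).json().get("edges", [])
            return [e["surfaceText"] for e in edges if e.get("surfaceText")][:limit]

        def respond(user_input: str) -> str:
            keyword = user_input.rstrip("?.!").split()[-1].lower()  # crude topic pick
            facts = common_sense_facts(keyword)
            return f"Speaking of {keyword}: {facts[0]}" if facts else "Tell me more."

        print(respond("I just adopted a dog"))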

    Tablets report

    In January 2014, the Government of Malta launched the ‘One Tablet per Child’ pilot project, whose aim was to prepare for the introduction of computer tablets in primary schools. An expression of interest was also published in order to test different types of hardware and software solutions and to collect feedback from educators and students. As part of this initiative, the Faculty of Information and Communication Technology was asked to assist. An inter-departmental team was set up, made up of academics from the Department of Intelligent Computer Systems and the Department of Computer Information Systems, who were entrusted with analysing the three major tablet platforms in order to create a coherent and impartial analysis to inform the selection of the final platform. The result of this exercise is this document, which was presented to the committee responsible for the tablets project. Throughout the document, one can find a thorough discussion of the positive and negative aspects of each platform, be it Android, iOS or Microsoft. While praising the most positive features of each platform, the document also highlights the issues that might arise when developing content for these operating systems (OSs) and the weaknesses that currently exist. We also examined issues that might arise when using these platforms; in particular, our analysis takes into consideration the fact that usage will happen in a primary classroom setting, so additional issues such as the sturdiness of the device had to be considered. Even though we mention some examples, we did not go into the merits of particular devices, because the market is so fragmented that it would have been impossible to pinpoint specific models or brands. Being a highly volatile sector means that the information presented in this document can be considered correct at the time of writing; however, we expect major changes in the coming months which will definitely change the way in which we interact with computers. The document is well suited to help the committee get up to date with the latest offerings and future potential of each platform, allowing them to make an informed decision; a decision which will have a long-lasting effect on the eventual success of the project and the ultimate wellbeing of our children.

    Motivating learning through mobile interaction

    Any goal is attained only with the correct dose of motivation instilled in the individual pursuing it. At the same time, mobile technology provides us with a range of sensors that allow us to measure valuable attributes of a person engaged in a learning experience. In this paper we study what motivates an individual and explore methods, delivered through the mobile device, for tapping into this motivation. The socio-cultural background of the learner is also brought into context, acting as one of the driving forces of the presented recommendation technique.
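
    The paper does not specify the recommendation technique in code, so the sketch below is purely illustrative: it combines hypothetical sensed engagement signals with hypothetical socio-cultural weights to pick a motivational nudge. Every signal, weight, and nudge name is invented for the example.

        def recommend_nudge(signals: dict[str, float], weights: dict[str, float]) -> str:
            """Score candidate nudges by weighted sensor evidence; return the best."""
            candidates = {
                "peer_challenge": signals["social_activity"] * weights["collectivist"],
                "badge_reward":   signals["session_streak"] * weights["achievement"],
                "short_quiz":     (1 - signals["attention"]) * weights["structure"],
            }
            return max(candidates, key=candidates.get)

        signals = {"social_activity": 0.8, "session_streak": 0.3, "attention": 0.4}
        weights = {"collectivist": 0.9, "achievement": 0.5, "structure": 0.6}
        print(recommend_nudge(signals, weights))   # -> "peer_challenge"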

    Next generation annotation interfaces for adaptive information extraction

    The evolution of the Internet into the largest existing digital library brings new challenges, one of the biggest being the location of information. The most promising approach seems to be searching semantically; however, this cannot work without semantically annotated documents. Such documents are few, and the manual annotation process needed to produce them is both time consuming and error prone. To solve this problem, Information Extraction (IE) technologies can be used to annotate documents automatically, but before doing so, IE tools require training examples. These examples are normally created manually by human annotators, and very few tools currently exist to support them. This paper proposes a methodology aimed at supporting annotators by reducing the number of annotations an IE system requires, thereby making learning more effective. The whole methodology is implemented in the Melita system, which is also described in this paper. Finally, enhancements to the existing methodology are proposed in order to make IE accessible to a wider range of users, from inexperienced to expert.
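
    The annotation-saving loop at the heart of this approach can be sketched as follows. The learner and annotator interfaces are placeholders rather than Melita’s actual API: the idea shown is that the system pre-annotates each document and shows the human the documents it is least sure about first, so every correction teaches it as much as possible.

        def annotation_queue(documents, learner):
            """Yield (document, suggested annotations), least confident first."""
            for doc in sorted(documents, key=lambda d: learner.confidence(d)):
                yield doc, learner.predict(doc)   # pre-annotations to correct

        def annotate(documents, learner, annotator):
            for doc, suggestions in annotation_queue(documents, learner):
                corrected = annotator(doc, suggestions)  # human fixes, not from scratch
                learner.train(doc, corrected)            # retrain after each document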